11 research outputs found

    Analysis and Simulation of Cerebellar Circuitry

    The cerebellum, a fist-sized structure of the brain, plays a crucial part in the execution and coordination of motor control tasks and cognitive activities. It is also remarkably able to adapt itself to new tasks and circumstances whenever errors in motor output are made. Over the years, valuable insight into cerebellar functionality has been gained through the application of concepts traditionally associated with control theory. In this thesis, neural spike train data recorded from cerebellar neurons at rest under in vivo conditions were examined. The first goal was to give an initial characterization of the spike firing properties of such neurons using statistical point process theory. The findings indicate that the distributions of inter-spike intervals are skewed and could often be approximated with either a gamma or a lognormal distribution. Furthermore, several spike trains could reasonably be treated as renewal processes, while others require a more complex description of their inter-spike interval dependency. Second, a Matlab model was developed to simulate the cerebellar contribution to motor control of the vestibulo-ocular reflex. The cerebellar output is fed forward to a linear model of the oculomotor plant. A comparison between head velocity and eye velocity serves as a teaching signal that enables the cerebellum to alter its functionality to better compensate for head movements. The simulations reveal that feeding the cerebellar output forward to the oculomotor plant through learning improves control performance, and they offer a plausible explanation for motor control of the vestibulo-ocular reflex.
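    The distribution comparison described above can be sketched as follows. This is an illustrative example on synthetic data (the thesis uses recorded spike trains), with a method-of-moments gamma fit and a log-domain lognormal fit compared by average log-likelihood:

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical ISI sample in seconds; recorded data would replace this.
    isis = rng.gamma(shape=2.0, scale=0.05, size=2000)

    # Method-of-moments gamma fit: shape k = mean^2/var, scale = var/mean.
    m, v = isis.mean(), isis.var()
    k, theta = m * m / v, v / m

    # Lognormal fit: mean and standard deviation of the log-ISIs.
    log_isis = np.log(isis)
    mu, sigma = log_isis.mean(), log_isis.std()

    # Average log-likelihood of the data under each candidate distribution.
    ll_gamma = np.mean((k - 1) * np.log(isis) - isis / theta
                       - k * math.log(theta) - math.lgamma(k))
    ll_lognorm = np.mean(-log_isis - math.log(sigma) - 0.5 * math.log(2 * math.pi)
                         - (log_isis - mu) ** 2 / (2 * sigma ** 2))
    print(ll_gamma, ll_lognorm)  # the larger value indicates the better fit
    ```

    For skewed ISI data, whichever of the two candidates yields the higher average log-likelihood would be reported as the better approximation.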

    Control Strategies for Improving Cloud Service Robustness

    This thesis addresses challenges in increasing the robustness of cloud-deployed applications and services to unexpected events and dynamic workloads. Without precautions, hardware failures and unpredictable, large traffic variations can quickly degrade the performance of an application due to a mismatch between provisioned resources and capacity needs. Similarly, disasters such as power outages and fires are larger-scale unexpected events that threaten the integrity of the underlying infrastructure on which an application is deployed. First, the self-adaptive software concept of brownout is extended to replicated cloud applications. By monitoring the performance of each application replica, brownout is able to counteract temporary overload situations by reducing the computational complexity of jobs entering the system. To avoid existing load balancers interfering with the brownout functionality, brownout-aware load balancers are introduced. Simulation experiments show that the proposed load balancers outperform existing ones in providing a high quality of service to as many end users as possible. Experiments in a testbed environment further show how a replicated brownout-enabled application is able to maintain high performance during overloads compared to its non-brownout equivalent. Next, a feedback controller for cloud autoscaling is introduced. Using a novel way of modeling the dynamics of a typical cloud application, a mechanism similar to the classical Smith predictor is presented to compensate for delays in reconfiguring resource provisioning. Simulation experiments show that the feedback controller achieves faster control of the response times of a cloud application than a threshold-based controller. Finally, a solution for handling the trade-off between performance and disaster tolerance for geo-replicated cloud applications is introduced. An automated mechanism for differentiating application traffic and replication traffic, and for dynamically managing their bandwidth allocations using an MPC controller, is presented and evaluated in simulation. Comparisons with commonly used static approaches reveal that, in overload situations, the proposed solution provides increased flexibility in managing the trade-off between performance and data consistency.

    Model-Based Deadtime Compensation of Virtual Machine Startup Times

    Scaling the amount of resources allocated to an application according to the actual load is a challenging problem in cloud computing. The emergence of autoscaling techniques allows autonomous decisions to be taken about when to acquire or release resources. The actuation of these decisions is, however, affected by time delays. It is therefore critical for the autoscaler to account for this phenomenon in order to avoid over- or under-provisioning. This paper presents a delay compensator inspired by the Smith predictor. The compensator makes it possible to close a simple feedback loop around a cloud application with a large, time-varying delay while preserving the stability of the controlled system. It also allows the closed-loop system to converge to a steady state, even in the presence of resource quantization. The presented approach is compared to a threshold-based controller with a cooldown period, as typically adopted in industrial applications.
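    The core of the delay-compensation idea can be illustrated with a toy model (all numbers and the integer VM model are illustrative, not the paper's design): an autoscaler that, Smith-predictor style, counts VMs still booting toward provisioned capacity, so it does not keep re-requesting the same shortfall during the startup delay:

    ```python
    DELAY = 5     # VM startup time, in control periods (assumed)
    TARGET = 10   # desired number of running VMs (assumed)

    online = 0
    pending = []  # list of (periods_until_boot, vm_count) requests in flight

    for step in range(40):
        # Age pending requests; VMs whose countdown reaches zero come online.
        pending = [(t - 1, n) for t, n in pending]
        online += sum(n for t, n in pending if t == 0)
        pending = [(t, n) for t, n in pending if t > 0]

        # Smith-style prediction: act on online PLUS already-requested capacity,
        # which prevents re-requesting the shortfall every period during boot.
        predicted = online + sum(n for _, n in pending)
        shortfall = TARGET - predicted
        if shortfall > 0:
            pending.append((DELAY, shortfall))

    print(online)  # converges to TARGET with no overshoot
    ```

    A naive controller acting on `online` alone would request the full shortfall every period until the first VMs boot, massively over-provisioning; including the in-flight requests in the feedback signal is what removes that overshoot.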

    A control theoretical approach to non-intrusive geo-replication for cloud services

    Complete data center failures may occur due to disastrous events such as earthquakes or fires. To attain robustness against such failures and reduce the probability of data loss, data must be replicated in another data center sufficiently geographically separated from the original one. Implementing geo-replication is expensive, as every data update operation in the original data center must be replicated in the backup. Running the application and the replication service in parallel is cost effective but creates a trade-off between replication consistency (and thus potential data loss) and application performance, which is reduced by network resource contention. We model this trade-off and provide a control-theoretical solution based on Model Predictive Control that dynamically allocates network bandwidth to accommodate the objectives of both the replication and the application data streams. We evaluate our control solution through simulations emulating the individual services, their traffic flows, and the shared network resource. The MPC solution maintains consistent performance over periods of persistent overload and recovers quickly once the system returns to a stable state. Additionally, the MPC balances the two objectives of consistency and performance according to the proportions specified in the objective function.
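    The bandwidth trade-off can be sketched in a drastically simplified one-step form (the paper uses a full MPC over a prediction horizon; capacity, demands, and weights below are illustrative assumptions). With a linear weighted cost on unserved demand, the optimal split under contention simply serves the heavier-weighted stream first:

    ```python
    CAPACITY = 100.0         # shared link capacity, Mbit/s (assumed)
    W_APP, W_REP = 2.0, 1.0  # objective weights: application traffic favored

    def allocate(app_demand, rep_demand):
        """Split link capacity between application and replication traffic."""
        if app_demand + rep_demand <= CAPACITY:
            return app_demand, rep_demand  # no contention: serve both fully
        # Overload: for a linear weighted cost, serving the stream with the
        # larger weight first minimizes total weighted unserved demand.
        if W_APP >= W_REP:
            app = min(app_demand, CAPACITY)
            return app, min(rep_demand, CAPACITY - app)
        rep = min(rep_demand, CAPACITY)
        return min(app_demand, CAPACITY - rep), rep

    print(allocate(60.0, 30.0))   # both streams fit: (60.0, 30.0)
    print(allocate(120.0, 30.0))  # overload: application preempts replication
    ```

    A real MPC formulation would additionally predict demand over a horizon and penalize sustained starvation of the replication stream, which is what lets the paper's controller bound the data-loss exposure rather than starve replication indefinitely.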

    Control-theoretical load-balancing for cloud applications with brownout

    Cloud applications are often subject to unexpected events like flash crowds and hardware failures. Without predictable behaviour, users may abandon an unresponsive application. This problem has been partially solved on two separate fronts: first, by adding a self-adaptive feature called brownout inside cloud applications to bound response times by modulating user experience, and, second, by introducing replicas -- copies of the application with the same functionality -- for redundancy, together with a load-balancer to direct incoming traffic. However, existing load-balancing strategies interfere with brownout self-adaptivity. Load-balancers are often based on response times, which are already controlled by the self-adaptive features of the application and hence are not a good indicator of how well a replica is performing. In this paper, we present novel load-balancing strategies specifically designed to support brownout applications. They base their decisions not on response times but on user experience degradation. We implemented our strategies in a self-adaptive application simulator, together with some state-of-the-art solutions. Results obtained in multiple scenarios show that the proposed strategies bring significant improvements over the state of the art.
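    The key routing idea can be sketched as follows. In brownout applications, each replica exposes a "dimmer" value: the fraction of requests served with optional content. A brownout-aware balancer routes to the replica degrading user experience the least, i.e. the one with the highest dimmer (replica names and dimmer values below are hypothetical, and the actual strategies in the paper are more elaborate):

    ```python
    # Dimmer per replica: 1.0 = full experience, 0.0 = fully degraded.
    replicas = {"r1": 0.9, "r2": 0.4, "r3": 0.7}

    def pick_replica(dimmers):
        # A higher dimmer means more optional content is being served,
        # so the replica has spare capacity; route the next request there.
        return max(dimmers, key=dimmers.get)

    print(pick_replica(replicas))  # r1
    ```

    Contrast this with a response-time-based balancer: since brownout holds every replica's response time near its setpoint, response times look uniformly "fine" and carry no routing signal, whereas the dimmer directly exposes each replica's headroom.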

    Stochastic Neural Firing Properties in Neurons of a Cerebellar Control System

    The cerebellar system for the voluntary control of arm-hand movements involves a large number of different neuron types. These neurons are located both inside the cerebellum and in extracerebellar brain structures, which provide processed motor and sensory information to the cerebellum. The various neuron types have different morphologies, receive different types, numbers, and patterns of synaptic inputs, and have different firing rates and kinetics. A common trait for all neurons, however, is that their firing properties have clear stochastic components, which is evident in recordings from neurons in brain preparations where all synaptic inputs are removed (in vitro). Here, we take advantage of a unique, comprehensive database of the various neuron types present within the cerebellar arm-hand control system, recorded from the brain in vivo, to provide a comparative description of their spike firing patterns. Although the inter-spike intervals of most cell types in this system can be described by a simple type of distribution characteristic of stochastic neurons, only in a few exceptional cases are the consecutive inter-spike intervals independent of each other. We conclude that the spike patterns of these neurons may be the result of multi-factorial sources of variability, including the patterns of the various synaptic inputs that the neurons receive in vivo and the inherent stochasticity of spike generation.
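    A standard first check for the (in)dependence of consecutive inter-spike intervals is their serial (lag-1) correlation. The sketch below uses synthetic spike trains, not recorded data: one renewal train with independent ISIs, and one given serial dependence by a slowly varying firing rate, which is one of the in vivo variability sources discussed above:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Renewal process: independent, gamma-distributed ISIs.
    renewal = rng.gamma(2.0, 0.05, size=5000)

    # Non-renewal process: the same ISIs modulated by a slow rate drift,
    # so neighboring intervals share a common scaling factor.
    slow_rate = np.repeat(rng.uniform(0.5, 1.5, size=50), 100)
    dependent = rng.gamma(2.0, 0.05, size=5000) * slow_rate

    def lag1_corr(isis):
        # Correlation between consecutive ISIs; values near zero are
        # consistent with a renewal-process description.
        return float(np.corrcoef(isis[:-1], isis[1:])[0, 1])

    print(round(lag1_corr(renewal), 3), round(lag1_corr(dependent), 3))
    ```

    A near-zero coefficient supports treating the train as a renewal process; a clearly positive one indicates that the ISI sequence needs a more complex description, such as a doubly stochastic (rate-modulated) model.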

    Online Spike Detection in Cloud Workloads

    We investigate methods for detecting rapid workload increases (load spikes) in cloud workloads. Such rapid and unexpected spikes are a main cause of poor performance or even crashing applications, as the allocated cloud resources become insufficient. Detecting spikes early is fundamental to performing corrective management actions, like allocating additional resources, before the spikes become large enough to cause problems. To this end, we propose a number of methods for early spike detection based on established techniques from adaptive signal processing. A comparative evaluation shows, for example, to what extent the different methods manage to detect spikes, how early the detection is made, and how frequently they falsely report spikes.
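    A minimal online detector in the spirit of these adaptive methods (the paper's actual algorithms differ; the smoothing factor, threshold, and workload trace below are illustrative) flags a spike when a new sample exceeds an exponentially weighted moving average of the recent workload by a multiplicative margin:

    ```python
    ALPHA = 0.2      # EWMA smoothing factor (assumed)
    THRESHOLD = 2.0  # flag a spike if sample > THRESHOLD * current estimate

    def detect_spikes(samples):
        estimate = samples[0]  # initialize the baseline from the first sample
        spikes = []
        for i, x in enumerate(samples[1:], start=1):
            if x > THRESHOLD * estimate:
                spikes.append(i)  # spike detected online, at sample index i
            # Update the adaptive baseline after the decision.
            estimate = ALPHA * x + (1 - ALPHA) * estimate
        return spikes

    workload = [100, 105, 98, 110, 102, 400, 420, 105, 99]  # requests/s
    print(detect_spikes(workload))  # [5, 6]
    ```

    The false-alarm / detection-delay trade-off discussed in the evaluation maps directly onto `ALPHA` and `THRESHOLD`: a lower threshold detects spikes earlier but reports more false positives on noisy workloads.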

    Compliance with guidelines for postoperative pain management in infants and children

    Background: Postoperative pain is still often inadequately assessed and/or recorded in infants and young children, despite evidence-based guidelines. Objectives: This prospective, observational study in a paediatric postoperative ward at a Swedish university hospital was designed to evaluate the effects on pain management of an intervention briefly reminding nursing staff of the corresponding local guidelines. Methods: Individual structured postoperative information on the first day and night after mainly otorhinolaryngeal or plastic surgery was obtained in 100 paediatric patients from on-site bedside observation protocols, patient records, and telephone interviews over two 5-week periods before and after a study intervention with brief systematic information on local guideline contents. Results: The intervention was followed by significantly more assessments (P = 0.0012), hourly assessments (P < 0.0001), and use of validated tools for assessment (P < 0.0001) of pain intensity in out-hospital patients, but by no change in guardian satisfaction. Corresponding changes in in-hospital patients were non-significant. Conclusions: Bedside compliance with guidelines for postoperative pain management can be considerably improved in out-hospital (and possibly also in-hospital) paediatric patients by a structured, brief reminder of existing guideline contents. Larger prospective studies are required to determine the importance of bedside compliance with clinical guidelines for postoperative comfort and safety in infants and children.

    Improving Cloud Service Resilience using Brownout-Aware Load-Balancing

    We focus on improving the resilience of cloud services (e.g., an e-commerce website) when correlated or cascading failures lead to computing capacity shortage. We study how to extend the classical cloud service architecture, composed of a load-balancer and replicas, with a recently proposed self-adaptive paradigm called brownout. Such services are able to reduce their capacity requirements by degrading user experience (e.g., disabling recommendations). Combining resilience with the brownout paradigm is, to date, an open practical problem. The issue is to ensure that replica self-adaptivity does not confuse the load-balancing algorithm into overloading replicas that are already struggling with capacity shortage. For example, load-balancing strategies based on response times cannot decide which replicas should be selected, since the response times are already controlled by the brownout paradigm. In this paper we propose two novel brownout-aware load-balancing algorithms. To test their practical applicability, we extended the popular lighttpd web server and load-balancer, thus obtaining a production-ready implementation. Experimental evaluation shows that the approach enables cloud services to remain responsive despite cascading failures. Moreover, when compared to Shortest Queue First (SQF), believed to be near-optimal in the non-adaptive case, our algorithms improve user experience by 5%, with high statistical significance, while preserving response time predictability.

    Control-based load-balancing techniques : Analysis and performance evaluation via a randomized optimization approach

    Cloud applications are often subject to unexpected events like flash crowds and hardware failures. Users who expect predictable behavior may abandon an unresponsive application when these events occur. Researchers and engineers have addressed this problem on two separate fronts: first, they introduced replicas - copies of the application with the same functionality - for redundancy and scalability; second, they added a self-adaptive feature called brownout inside cloud applications to bound response times by modulating user experience. The presence of multiple replicas requires a dedicated component to direct incoming traffic: a load-balancer. Existing load-balancing strategies based on response times interfere with the response time controller developed for brownout-compliant applications. In fact, the brownout approach bounds response times using a control action; hence the response time, which was used to aid load-balancing decisions, is not a good indicator of how well a replica is performing. To fix this issue, this paper reviews several proposals for brownout-aware load-balancing and provides a comprehensive experimental evaluation that compares them. To provide formal guarantees on load-balancing performance, we use a randomized optimization approach and apply scenario theory. We perform an extensive set of experiments on a real machine, extending the popular lighttpd web server and load-balancer and obtaining a production-ready implementation. Experimental results show an improvement in user experience over Shortest Queue First (SQF), believed to be near-optimal in the non-adaptive case, while preserving response time predictability.